Cascade R-CNN: Delving into High Quality Object Detection
In object detection, an intersection over union (IoU) threshold is required
to define positives and negatives. An object detector trained with a low IoU
threshold, e.g. 0.5, usually produces noisy detections. However, detection
performance tends to degrade as the IoU threshold increases. Two main
factors are responsible for this: 1) overfitting during training, due to
exponentially vanishing positive samples, and 2) inference-time mismatch
between the IoUs for which the detector is optimal and those of the input
hypotheses. A multi-stage object detection architecture, the Cascade R-CNN, is
proposed to address these problems. It consists of a sequence of detectors
trained with increasing IoU thresholds, to be sequentially more selective
against close false positives. The detectors are trained stage by stage,
leveraging the observation that the output of a detector is a good distribution
for training the next higher quality detector. The resampling of progressively
improved hypotheses guarantees that all detectors have a positive set of
examples of equivalent size, reducing the overfitting problem. The same cascade
procedure is applied at inference, enabling a closer match between the
hypotheses and the detector quality of each stage. A simple implementation of
the Cascade R-CNN is shown to surpass all single-model object detectors on the
challenging COCO dataset. Experiments also show that the Cascade R-CNN is
widely applicable across detector architectures, achieving consistent gains
independently of the baseline detector strength. The code will be made
available at https://github.com/zhaoweicai/cascade-rcnn
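The stage-wise label assignment described above can be illustrated with a small sketch: each cascade stage labels hypotheses positive or negative at a stricter IoU threshold than the last (0.5/0.6/0.7 are the thresholds used in the paper; the helper names below are illustrative, not from the released code):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def assign_labels(hypotheses, gt_boxes, threshold):
    """Label each hypothesis positive (1) or negative (0) at one stage's IoU threshold."""
    labels = []
    for h in hypotheses:
        best = max((iou(h, g) for g in gt_boxes), default=0.0)
        labels.append(1 if best >= threshold else 0)
    return labels

# Each cascade stage is trained with a stricter threshold than the previous one;
# its resampled output hypotheses feed the next, higher-quality stage.
STAGE_THRESHOLDS = [0.5, 0.6, 0.7]
```

Because each stage resamples the (progressively better) hypotheses of the previous one, the positive set stays roughly the same size even as the threshold rises, which is what mitigates the overfitting problem noted above.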
Towards Universal Object Detection
Object detection is one of the most important and challenging research topics in computer vision. It plays an important role in everyday life and has many applications, e.g. surveillance, autonomous driving, robotics, drones, medical imaging, etc. The ultimate goal of object detection is a universal object detector that works well in any case, under any condition, like the human vision system. However, there are multiple challenges to the universality of object detection, e.g. scale variance, high-quality requirements, domain shift, computational constraints, etc. These prevent the object detector from being widely used for objects of various scales, critical applications requiring extremely accurate localization, scenarios with changing domain priors, and diverse hardware settings. To address these challenges, multiple solutions are proposed in this thesis: an efficient multi-scale architecture to achieve scale-invariant detection, a robust multi-stage framework effective for high-quality requirements, a cross-domain solution to extend universality over various domains, and a design of complexity-aware cascades and a novel low-precision network to enhance universality under different computational constraints. All these efforts substantially improve the universality of object detection, allowing the advanced object detector to be applied in broader environments.
Learning Complexity-Aware Cascades for Deep Pedestrian Detection
The design of complexity-aware cascaded detectors, combining features of very
different complexities, is considered. A new cascade design procedure is
introduced, by formulating cascade learning as the Lagrangian optimization of a
risk that accounts for both accuracy and complexity. A boosting algorithm,
denoted as complexity aware cascade training (CompACT), is then derived to
solve this optimization. CompACT cascades are shown to seek an optimal
trade-off between accuracy and complexity by pushing features of higher
complexity to the later cascade stages, where only a few difficult candidate
patches remain to be classified. This enables the use of features of vastly
different complexities in a single detector. As a result, the feature pool can be
expanded to features previously impractical for cascade design, such as the
responses of a deep convolutional neural network (CNN). This is demonstrated
through the design of a pedestrian detector with a pool of features whose
complexities span orders of magnitude. The resulting cascade generalizes the
combination of a CNN with an object proposal mechanism: rather than a
pre-processing stage, CompACT cascades seamlessly integrate CNNs in their
stages. This enables state-of-the-art performance on the Caltech and KITTI
datasets at fairly fast speeds.
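The Lagrangian selection rule at the heart of this cascade design can be caricatured as follows: at each boosting round, choose the weak learner that minimizes empirical risk plus a multiplier λ times its evaluation cost. The candidate list and λ values below are toy assumptions for illustration, not the paper's actual feature pool:

```python
def select_weak_learner(candidates, lam):
    """candidates: list of (name, empirical_risk, compute_cost) tuples.
    Returns the candidate minimizing risk + lam * cost (the Lagrangian)."""
    return min(candidates, key=lambda c: c[1] + lam * c[2])

# Toy feature pool spanning orders of magnitude in cost (illustrative values).
features = [
    ("channel_sum", 0.30, 1.0),    # cheap hand-crafted feature
    ("hog_cell",    0.25, 5.0),    # moderately expensive
    ("cnn_conv5",   0.10, 200.0),  # deep CNN response, very expensive
]

# Early stage, many patches survive, so complexity is penalized heavily:
# select_weak_learner(features, lam=0.05) picks the cheap feature.
# Late stage, few patches remain, so complexity is nearly free:
# select_weak_learner(features, lam=1e-4) picks the CNN response.
```

This is how expensive features such as CNN responses end up pushed to the later stages, where only a few difficult candidate patches remain to be classified.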
UA-DETRAC: A New Benchmark and Protocol for Multi-Object Detection and Tracking
In recent years, numerous effective multi-object tracking (MOT) methods have
been developed, driven by a wide range of applications. Existing performance
evaluations of MOT methods usually separate the object tracking step from the
object detection step by using the same fixed object detection results for
comparisons. In this work, we perform a comprehensive quantitative study on the
effects of object detection accuracy on overall MOT performance, using the
new large-scale University at Albany DETection and tRACking (UA-DETRAC)
benchmark dataset. The UA-DETRAC benchmark dataset consists of 100 challenging
video sequences captured from real-world traffic scenes (over 140,000 frames
with rich annotations, including occlusion, weather, vehicle category,
truncation, and vehicle bounding boxes) for object detection, object tracking,
and MOT systems. We evaluate complete MOT systems constructed from combinations
of state-of-the-art object detection and object tracking methods. Our analysis
shows the complex effects of object detection accuracy on MOT system
performance. Based on these observations, we propose new evaluation tools and
metrics for MOT systems that consider both object detection and object tracking
for comprehensive analysis.
Comment: 18 pages, 11 figures, accepted by CVI
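The evaluation protocol described above, scoring complete MOT systems built from every detector/tracker pairing rather than a single fixed detection input, can be outlined as a simple grid evaluation; `evaluate_mot` and the names below are placeholders, not the benchmark's actual toolkit:

```python
from itertools import product

def evaluate_all(detectors, trackers, evaluate_mot):
    """Score every detector/tracker combination as one complete MOT system.
    evaluate_mot(detector, tracker) -> float is a user-supplied metric."""
    return {(d, t): evaluate_mot(d, t) for d, t in product(detectors, trackers)}
```

Evaluating the full grid, instead of fixing the detections, is what exposes the complex interaction between detection accuracy and overall MOT performance that the analysis reports.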
Masked Vision and Language Modeling for Multi-modal Representation Learning
In this paper, we study how to use masked signal modeling in vision and
language (V+L) representation learning. Instead of developing masked language
modeling (MLM) and masked image modeling (MIM) independently, we propose to
build joint masked vision and language modeling, where the masked signal of one
modality is reconstructed with help from the other modality. This is
motivated by the nature of image-text paired data: both the image and
the text convey almost the same information, but in different formats. The
masked signal reconstruction of one modality conditioned on another modality
can also implicitly learn cross-modal alignment between language tokens and
image patches. Our experiments on various V+L tasks show that the proposed
method not only achieves state-of-the-art performance when using a large amount
of data, but also outperforms competing methods by a significant margin in
regimes of limited training data.
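The joint masking scheme described above, reconstructing the masked signal of one modality conditioned on the other, unmasked modality, might be sketched as below. The mask rate, mask symbol, and `model.reconstruct` interface are illustrative assumptions, not the paper's implementation:

```python
import random

MASK_RATE = 0.15   # fraction of tokens/patches to mask (assumed value)
MASK = "<mask>"

def mask_tokens(tokens, rng):
    """Replace a random subset of tokens with a mask symbol.
    Returns the masked sequence and the masked positions."""
    masked, positions = [], []
    for i, tok in enumerate(tokens):
        if rng.random() < MASK_RATE:
            masked.append(MASK)
            positions.append(i)
        else:
            masked.append(tok)
    return masked, positions

def joint_masked_step(text_tokens, image_patches, model, rng):
    """One training step: reconstruct masked text conditioned on the full
    image, and masked image patches conditioned on the full text."""
    masked_text, text_pos = mask_tokens(text_tokens, rng)
    masked_img, img_pos = mask_tokens(image_patches, rng)
    loss_text = model.reconstruct(masked_text, context=image_patches, targets=text_pos)
    loss_img = model.reconstruct(masked_img, context=text_tokens, targets=img_pos)
    return loss_text + loss_img
```

Because each reconstruction is conditioned on the other modality, minimizing the summed loss implicitly pressures the model to align language tokens with image patches, as the abstract notes.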
Rethinking Few-Shot Object Detection on a Multi-Domain Benchmark
Most existing works on few-shot object detection (FSOD) focus on a setting
where both pre-training and few-shot learning datasets are from a similar
domain. However, few-shot algorithms are important in multiple domains; hence,
evaluation needs to reflect these broad applications. We propose a Multi-dOmain
Few-Shot Object Detection (MoFSOD) benchmark consisting of 10 datasets from a
wide range of domains to evaluate FSOD algorithms. We comprehensively analyze
the impacts of freezing layers, different architectures, and different
pre-training datasets on FSOD performance. Our empirical results show several
key factors that have not been explored in previous works: 1) contrary to
previous belief, on a multi-domain benchmark, fine-tuning (FT) is a strong
baseline for FSOD, performing on par or better than the state-of-the-art (SOTA)
algorithms; 2) utilizing FT as the baseline allows us to explore multiple
architectures, and we find that they have a significant impact on downstream
few-shot tasks, even with similar pre-training performances; 3) by decoupling
pre-training and few-shot learning, MoFSOD allows us to explore the impact of
different pre-training datasets, and the right choice can boost the performance
of the downstream tasks significantly. Based on these findings, we list
possible avenues of investigation for improving FSOD performance and propose
two simple modifications to existing algorithms that lead to SOTA performance
on the MoFSOD benchmark. The code is available at
https://github.com/amazon-research/few-shot-object-detection-benchmark.
Comment: Accepted at ECCV 202
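The layer-freezing analysis mentioned above can be sketched as a simple partition of a backbone's parameters by name prefix; the parameter names below are hypothetical, and in a real framework one would toggle `requires_grad` on each parameter instead:

```python
def split_trainable(param_names, frozen_prefixes):
    """Partition parameter names into frozen and trainable sets by prefix.
    Freezing nothing (frozen_prefixes=[]) corresponds to full fine-tuning,
    the strong FT baseline discussed above."""
    frozen = [n for n in param_names
              if any(n.startswith(p) for p in frozen_prefixes)]
    trainable = [n for n in param_names if n not in frozen]
    return frozen, trainable

# Hypothetical detector parameters: a backbone plus detection heads.
params = ["backbone.layer1.w", "backbone.layer2.w", "head.cls.w", "head.box.w"]
frozen, trainable = split_trainable(params, ["backbone."])
```

Sweeping `frozen_prefixes` from the whole backbone down to nothing is one way to run the freezing ablation across the benchmark's 10 domains.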